Review Paper on Various Filtering Techniques and Future Scope to Apply These on TEM Images

Authors

  • Garima Goyal
  • Ajay Kumar Bansal
  • Manish Singhal
Abstract

This is a review paper. The review is carried out in two directions: firstly, a survey of the work done on TEM images, and secondly, a survey of how and where the basic filters that led to the evolution of a number of other filters have been used in various applications over the years, together with the effect of these denoising filters on normal images.

I. USAGE AND WORK DONE ON TEM IMAGES

The field of image processing has made significant progress in the quantitative analysis of biomedical images over the last 20 years. In certain domains, such as brain imaging, scientific papers that test clinical hypotheses using sophisticated image filtering and segmentation algorithms are not uncommon. Compared to the vast amount of research in medical imaging modalities such as MRI and CT, the number of scientific papers on electron microscopy applications in the image processing community has been very limited. Tasdizen proposed an automatic method for estimating the illumination field using only image intensity gradients [1]. The computational analysis of neurons entails their segmentation and reconstruction from TEM images, but is challenged by the heavily textured nature of cellular TEM images and typically low signal-to-noise ratios. Tasdizen proposed a new partial differential equation for enhancing the contrast and continuity of cell membranes in TEM images [2]. A microscopy image gets corrupted by noise, which may arise in the process of acquiring the image, during its transmission, or even during reproduction of the image. Analysis and comparison of various filters were made on a TEM image for denoising, and it was found that the performance of the Wiener filter after de-noising for Salt & Pepper, Poisson and Gaussian noise is better than that of the Mean filter and Median filter. Also, the performance of the Median filter after de-noising for Salt & Pepper noise is better than that of the Mean filter and Wiener filter [3].

II. USE AND THE EFFECT OF VARIOUS DENOISING FILTERING TECHNIQUES OVER THE YEARS

In 1984, a method for removing impulse noise from images was proposed in which the filtering scheme is based on replacing the central pixel value by the generalized mean value of all pixels inside a sliding window. The concepts of thresholding and complementation, which are shown to improve the performance of the generalized mean filter, were introduced, and the threshold is derived using a statistical theory. The actual performance of the proposed filter was compared with that of the commonly used median filter by filtering noise-corrupted real images, and the hardware complexity of the two types of filters was compared, indicating the advantages of the generalized mean filter [4]. By 1988, two algorithms using adaptive-length median filters were proposed for improving impulse-noise-removal performance in image processing. The algorithms achieved significantly better image quality than regular (fixed-length) median filters when the images are corrupted by impulse noise. One of the algorithms, when realized in hardware, requires rather simple additional circuitry, and both can easily be integrated into efficient hardware realizations for median filters [5].
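As a point of reference for the kind of comparison reported in [3], the following minimal Python sketch applies mean, median and Wiener filters to a salt-and-pepper corrupted grayscale image with SciPy. The synthetic placeholder image, window size and noise density are illustrative assumptions only, not the setup used in [3], and the PSNR figures it prints are purely indicative.

```python
import numpy as np
from scipy import ndimage, signal

def psnr(clean, denoised, peak=1.0):
    """Peak signal-to-noise ratio in dB."""
    mse = np.mean((clean - denoised) ** 2)
    return 10 * np.log10(peak ** 2 / mse)

def add_salt_pepper(img, density=0.05, rng=None):
    """Corrupt a fraction `density` of pixels with 0/1 impulses."""
    rng = np.random.default_rng() if rng is None else rng
    noisy = img.copy()
    mask = rng.random(img.shape) < density
    noisy[mask] = rng.integers(0, 2, size=mask.sum()).astype(img.dtype)
    return noisy

# Placeholder for a real grayscale TEM micrograph normalised to [0, 1].
rng = np.random.default_rng(0)
tem = rng.random((256, 256))
noisy = add_salt_pepper(tem, 0.05, rng)

mean_out   = ndimage.uniform_filter(noisy, size=3)   # 3x3 mean filter
median_out = ndimage.median_filter(noisy, size=3)    # 3x3 median filter
wiener_out = signal.wiener(noisy, mysize=3)          # adaptive Wiener filter

for name, out in [("mean", mean_out), ("median", median_out), ("wiener", wiener_out)]:
    print(f"{name:>6}: PSNR = {psnr(tem, out):.2f} dB")
```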
By the beginning of 1995, filters with variable window size for removing impulses while preserving sharpness were proposed. The first, called the ranked-order based adaptive median filter (RAMF), is based on a test for the presence of impulses in the center pixel itself, followed by a test for the presence of residual impulses in the median filter output. The second, called the impulse size based adaptive median filter (SAMF), is based on detecting the size of the impulse noise. It was shown that the RAMF is superior to the nonlinear mean Lp filter in removing positive and negative impulses while simultaneously preserving sharpness, and that the SAMF is superior to Lin's adaptive scheme because it is simpler and performs better in removing a high density of impulsive noise as well as non-impulsive noise while preserving fine details. Simulations on standard images confirmed that these algorithms are superior to the standard median filter [6]. Two fast algorithms were then developed to compute a set of parameters, called Mi's, of weighted median filters for integer weights and real weights, respectively. The Mi's, which characterize the statistical properties of weighted median filters and are the critical parameters in designing optimal weighted median filters, are defined as the cardinality of the positive subsets of weighted median filters. The first algorithm, for integer weights, is about four times faster than the existing algorithm. The second algorithm, which applies to real weights, reduces the computational complexity significantly for many applications where symmetric weight structures are assumed. Applications of these new algorithms include the design of optimal weighted filters, computation of the output distributions, the output moments and the rank selection probabilities, and evaluation of noise attenuation for weighted median filters [7]. In 1996, a novel median-type filter controlled by fuzzy rules was proposed in order to remove impulsive noise from signals such as images. The filter was obtained as a weighted sum of the input signal and the output of the median filter, and the weight is set based on fuzzy rules concerning the states of the input signal sequence. Moreover, this weight is obtained optimally by a learning method, so that the mean square error of the filter output for some training signal data is minimized. Results of image processing showed the high performance of this filter [8]. Giovanni in 1997 proposed the use of the median filter (MF) within the Bayesian framework, which allowed global methods to be developed for both image smoothing and image approximation by the MF 'roots'. A method for solving the approximation problem was then proposed, based on stochastic optimization with constraints. Results of the proposed method for both simulated and real binary images were illustrated and compared to results from a known deterministic method [9].
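To make the ranked-order adaptive median idea of [6] concrete, here is a small, unoptimized sketch; it is a generic simplification of the approach, not the exact filter specified in the paper. The window grows until its median is not itself an impulse, and the centre pixel is replaced only when it lies at the window extremes; the maximum window size is an assumed parameter.

```python
import numpy as np

def adaptive_median(img, max_window=7):
    """Ranked-order based adaptive median filtering (RAMF-style sketch).

    For each pixel, the window grows until the window median is not itself
    an impulse (strictly between the window min and max); the centre pixel
    is then kept if it is not an impulse, otherwise replaced by the median.
    """
    pad = max_window // 2
    padded = np.pad(img, pad, mode="reflect")
    out = img.copy()
    rows, cols = img.shape
    for i in range(rows):
        for j in range(cols):
            for w in range(3, max_window + 1, 2):      # 3x3, 5x5, ...
                r = w // 2
                win = padded[i + pad - r:i + pad + r + 1,
                             j + pad - r:j + pad + r + 1]
                lo, med, hi = win.min(), np.median(win), win.max()
                if lo < med < hi:                       # median is not an impulse
                    centre = img[i, j]
                    out[i, j] = centre if lo < centre < hi else med
                    break
            else:
                # largest window still failed the test: fall back to its median
                out[i, j] = med
    return out
```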
A new type of adaptive center weighted median filter was developed in 2000 for impulsive noise reduction in an image without degradation of the original signal. The weight in this filter is decided by a weight controller based on counter-propagation networks; this controller classifies an input vector into a cluster according to its features and gives the weight corresponding to that cluster. The parameters in the weight controller are adjustable using a learning algorithm, and the degradation of the original signal can be reduced by the proposed technique [10]. A vector median filter suitable for colour image processing was presented in 2001, based on a new ordering of vectors in the HSV colour space [11]. Weighted vector median filters (WVMF) emerged as a powerful tool for the non-linear processing of multi-component signals in 2002. These filters are parametrized by a set of N weights, and two optimization techniques for these weights for colour image processing were introduced. Both approaches were evaluated by simulations related to the denoising of textured, or natural, colour images in the presence of impulsive noise; furthermore, as they are complementary, they were also tested when used together [12]. An effort was made in 2004 to improve the median-based filter so as to preserve image details while effectively suppressing impulsive noise; it achieves its effect through a summation of the weighted output of the median filter and the related weighted input signal. The weights are set in accordance with fuzzy rules. In order to design this weight function, a method to partition the observation vector space and a learning approach were proposed so that the mean square error of the filter output can be minimized. The partition fuzzy filter provided excellent robustness with respect to various percentages of impulse noise in the testing examples and outperformed the filters then reported in the literature [13]. The maximum–minimum exclusive median method was then developed to estimate the current pixel. Simulation results indicated that the proposed filter impressively outperforms other techniques in terms of noise suppression and detail preservation across a wide range of impulse noise corruption, ranging from 1% to 90% [14]. A new operator for removing impulse noise from digital images was introduced by Yuksel in 2006. The proposed operator was a hybrid filter constructed by combining four center-weighted median filters (CWMF) with a simple adaptive neuro-fuzzy inference system (ANFIS). The results showed that the proposed operator significantly outperforms the other operators and efficiently removes impulse noise from digital images without distorting image details and texture [15]. In the same direction, a new adaptive center weighted median (ACWM) filter was proposed in 2007 for improving the performance of median-based filters, preserving image details while effectively suppressing impulsive noise. The noise filtering procedure is progressively applied through several iterations so that the mean square error of the output can be minimized [16]. Qing Hua Hang in 2008 showed that median filters can be used for reducing interpolation error and improving the quality of 3D images in a freehand 3D ultrasound (US) system. Compared with the voxel nearest-neighbourhood (VNN) and distance-weighted (DW) interpolation methods, the four median filters reduced the interpolation error by 8.0–24.0% and 1.2–21.8%, respectively, when 1/4 to 5 [17]. The original switching median filter cannot detect a noise pixel whose value is close to its neighbours if the threshold is designed to emphasize detail preservation, so in 2009 it was modified by adding one more noise detector, based on the rank order arrangement of the pixels in the sliding window, to improve the capability of impulse noise removal [18].
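The center-weighted median operation underlying the hybrid and adaptive filters of [15] and [16] can be pictured with the following generic sketch (not the filters from those papers): the centre sample is simply replicated a chosen number of times before the window median is taken, so a larger weight biases the output toward the original pixel.

```python
import numpy as np

def center_weighted_median(img, weight=3, size=3):
    """Center-weighted median: the centre pixel is replicated `weight`
    times inside each window before the median is taken, so larger
    weights preserve more detail but remove fewer impulses."""
    pad = size // 2
    padded = np.pad(img, pad, mode="reflect")
    out = np.empty_like(img, dtype=float)
    rows, cols = img.shape
    for i in range(rows):
        for j in range(cols):
            win = padded[i:i + size, j:j + size].ravel()
            centre = img[i, j]
            # replicate the centre sample (weight - 1) extra times
            samples = np.concatenate([win, np.full(weight - 1, centre)])
            out[i, j] = np.median(samples)
    return out
```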
Yakup in 2010 showed that the performance of recursive impulse noise filters can be improved by the use of image rotation and fuzzy processing [19]. A two-phase median filter based iterative method for removing random-valued impulse noise was also proposed in 2010; simulation results indicated that the proposed method performs better than many well-known methods while preserving its simplicity [20]. The use of median filters was extended in 2011 to denoising infrared images, and Ozen in 2011 showed that the median filter can be used in a fingerprint recognition algorithm [21]. Zhouping recently used a median–Gaussian filtering framework for noise removal in X-ray microscopy images [22]. The directional weighted median filter was modified for denoising images corrupted by salt and pepper noise [23], and a faster approach for noise reduction in infrared images was shown recently in January [24]. In 1961, Wiener spectra were inferred from one-dimensional measurements; an example was given of the use of this concept to extrapolate one-dimensional background measurements for the calculation of discrimination for a scanning system [25]. A technique for experimentally obtaining a system function satisfying the Wiener minimum mean squared error criterion was presented in 1969 [26]. Wiener filtering assumes knowledge of the signal and noise autocorrelations or spectral densities. When this information is only approximately known, an optimum bounding filter can be designed for the Wiener problem. In 1975, Nahi described the design of a filter in which the actual estimation error covariance is bounded by the covariance calculated by the estimator; the estimator therefore generates a bound on the unavailable actual error covariance and prevents its apparent divergence. The bounding filter can be designed to be of lower order than the Wiener filters associated with each possible set of signal and noise spectral densities. Conditions for the design of the optimum (minimum mean-square-error) bounding filter within a permissible class of solutions were discussed, and the same approach to the design of bounding filters can be applied to a K/B filter version of the Wiener problem [27]. In 1979, general conditions were given for the optimality of non-linear Wiener filters which minimize the mean-square difference between the desired and actual filter outputs; these conditions, which are a generalization of the Wiener–Hopf equation, were applied to the Gaussian case, and the kernels of the optimum realizable and unrealizable systems were derived [28]. For the estimation of a signal observed with additive white noise, Potter in 1991 showed that the optimum linear least-squares filter constrained to have its impulse response time-limited to the interval [0,T] satisfies a truncated version of the Wiener–Hopf equation [29]. In 1983, the performance of Wiener filtering under spectral uncertainty was presented; for a variety of spectral uncertainty models, the Wiener filter was shown to have undesirable sensitivity to even small deviations from the signal and noise spectral densities used to design the filter [30]. In 1990, Pancorbo showed the use of a circular-aperture-like point spread function to restore scanning tunneling microscopy (STM) images [31]. The sensitivity of the inverse filter to noise is often thought to be the reason that inverse filter restorations of motion-blurred images are normally dominated by errors.
In 1991 it was shown that even in the absence of noise, there is a large error component, called the edge error, that arises because real images seldom have the periodicity implicitly assumed by the discrete Fourier transform operation; the best restorations were obtained by subjecting the windowed blurred image to a Wiener filter of large signal-to-noise ratio [32]. The use of Wiener filter design for molecular bone imaging was introduced in 1993 [33]. A cumulant (higher-order statistics) based mean-square-error (MSE) criterion was proposed for the design of Wiener filters when both the given wide-sense stationary random signal x(n) and the desired signal d(n) are non-Gaussian and contaminated by Gaussian noise sources, and it was theoretically shown that the Wiener filter designed with the proposed criterion is identical to the conventional correlation (second-order statistics) based Wiener filter as if both x(n) and d(n) were noise-free measurements [34]. A new method was presented based on transforming the Poisson noise into Gaussian additive noise, filtering the projections in blocks through the Wiener filter, and performing the inverse transformation. Results with real data indicated that this method gives superior results, as compared to conventional backprojection with the ramp filter, by taking into consideration both resolution and noise through a mean square error criterion [35]. In order to enhance defects relative to the background noise of large-grained materials, different algorithms have been developed. Wiener filtering techniques have proved to be efficient for the SNR enhancement of ultrasonic signals coming from highly scattering materials. These processing algorithms are based on designing a filter that has large gain at frequencies where the SNR is high and low gain at frequencies where the SNR is small. However, this technique does not consider two important ultrasonic effects: the finite time duration of the flaw UT signal coming from a defect, and the distortion of the frequency components of the travelling wave-front due to dispersion. A time–frequency Wiener filter was therefore proposed in 2002 that takes these two characteristics into account; experimental results showed that the proposed time–frequency algorithm has excellent performance in SNR enhancement [36]. The standard Wiener filtering formulation requires an iterative estimation of the clean speech spectrum. A non-iterative, faster algorithm was proposed in 2004, employing a time-varying noise suppression factor based on the frame-by-frame SNR, giving the ability to suppress those parts of the degraded signal where speech is not likely to be present while not suppressing, and hence not distorting, the speech segments as much. The proposed method also outperformed the well-known minimum mean-square error (MMSE) short-time spectral amplitude estimator technique in terms of subjective quality [37]. Khireddine in 2007 showed that the Wiener filter is a solution to the restoration problem based upon the hypothesized use of a linear filter and the minimum mean-square (or rms) error criterion [38]. A spectral filtering method based on a hybrid Wiener filter was presented in 2009 for speech enhancement [39].
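The Wiener deconvolution idea discussed around [32] can be sketched compactly in the frequency domain. The horizontal motion-blur kernel, the constant inverse-SNR term K and the placeholder image below are assumptions made purely for illustration, not the setup of any cited work.

```python
import numpy as np

def wiener_deconvolve(blurred, psf, K=0.01):
    """Frequency-domain Wiener deconvolution with a constant
    inverse-SNR term K: X = H* / (|H|^2 + K) * Y."""
    H = np.fft.fft2(psf, s=blurred.shape)        # transfer function of the blur
    Y = np.fft.fft2(blurred)
    G = np.conj(H) / (np.abs(H) ** 2 + K)        # Wiener restoration filter
    return np.real(np.fft.ifft2(G * Y))

def motion_psf(length=9):
    """Horizontal motion blur over `length` pixels."""
    psf = np.zeros((1, length))
    psf[0, :] = 1.0 / length
    return psf

rng = np.random.default_rng(1)
image = rng.random((128, 128))                   # placeholder for a real image
psf = motion_psf(9)
blurred = np.real(np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(psf, s=image.shape)))
blurred += 0.01 * rng.standard_normal(image.shape)  # additive noise
restored = wiener_deconvolve(blurred, psf, K=0.01)
```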
Noise reduction is often formulated as a linear filtering problem in the frequency domain. With this formulation, the core issue of noise reduction becomes how to design an optimal frequency-domain filter that can significantly suppress noise without introducing perceptually noticeable speech distortion. While higher-order information can be used, most existing approaches use only second-order statistics to design the noise-reduction filter because they are relatively easier to estimate and more reliable. Jacob in 2010 discussed the design of optimal and suboptimal noise reduction filters using both the variance and the pseudo-variance [40]. Juang utilized the Wiener filtering algorithm with a pseudo-inverse technique, the key idea being that the Wiener filtering algorithm can be used to process a given ultrasound signal while making the filtering less sensitive to slight changes in input conditions, and investigated the possibility of employing this approach for pre-processing in ultrasound image applications. When compared with the median filter, mean filter and adaptive filter, the results revealed that the proposed method has better noise filtering capability than the other three methods [41]. A variation of the Wiener filter was found useful for a variety of single-particle cryo-EM applications, including 3D reconstruction [42]. Recently, a sub-band cross-correlation compensated Wiener filter combined with harmonic regeneration was used for speech enhancement [43], and a combined multi-channel Wiener filter based noise reduction and dynamic range compression in hearing aids was presented [44]. In 1991, Frisch studied the wavelet transforms of self-similar random processes, of the kind assumed in the Kolmogorov (1941) theory of turbulence, and showed that, after suitable rescaling, the wavelet transform at a given position becomes a stationary random function of the logarithm of the scale argument in the transform [45]. An optical implementation of the wavelet and inverse wavelet transforms was introduced in 1992; appropriate wavelets and their corresponding band-pass filters were selected for optical processing, and a multichannel optical processing system with two gratings was set up to obtain image representation and image reconstruction [46]. The application of the wavelet transform to the determination of peak intensities in flow-injection analysis was studied in 1992 with regard to its properties of minimizing the effects of noise and baseline drift. The results indicated that for white noise and a favourable peak shape, a signal-to-noise ratio of 2 can be tolerated at the 5% error level, which means that a significant reduction in the detection limit can be obtained in comparison with classical signal-processing methods; in this respect, significant differences were observed between pure Gaussian and exponentially modified Gaussian peaks [47]. A non-parametric algorithm for detecting and locating corners of planar curves was proposed by Lee in 1993. The algorithm was based on the multiscale wavelet transform of the orientation of the curve, which can effectively utilize both the positions and the magnitudes of the local extrema of the transform results. Experiments showed that this detector is more effective than single-scale corner detectors, while being more efficient than the multiscale corner detector of Rattarangsi and Chin [48]. The wavelet transform gives information in both the spatial and frequency domains and is a very useful tool for describing hierarchical structures.
Antoine showed that deconvolution can be done using filtered wavelet coefficients; by computing the wavelet from the point spread function, a new transform algorithm was found [49]. The basic concepts of the discrete wavelet transform (DWT) and the wavelet packet transform (WPT) were presented, illustrated and then applied to real and simulated signals, and different approaches to the selection of WPT coefficients aiming at signal compression and denoising were described [50]. A method to optimize the parameters used in signal denoising in the wavelet domain was presented in 1999; based on a cross-validation (CV) procedure, it permits selection of the best decomposition level and the best wavelet filter function to denoise a signal in the discrete wavelet domain [51]. The effectiveness of wavelet-based algorithms for data recovery was considered, and a novel method based on coefficient de-noising according to the WienerShrink method of wavelet thresholding was proposed in 1999. Simulation results highlighted the advantages of the de-noising method over classical approaches based on the mean square error criterion [52]. In 2001, Dong applied the discrete wavelet transform to electrochemical noise analysis (ENA). The experimental results demonstrated that the DWT could improve the calculation of the classic noise resistance Rn and the spectral noise resistance Rsn(f) because it could remove the low-frequency trend coupled in the potential or current fluctuations very well [53]. The presence of film grain often imposes the crucial quality choice between film enlargement and speed. An automatic technique was presented for reducing the amount of grain in film images; the technique reduced the noise by thresholding the wavelet components of the image with a parameterised family of functions obtained with an initial training on a set of images [54]. In 2009, M.S. Reis introduced the methods of Fourier and wavelet analysis for enhancing the signal-to-noise ratio in typical chemometric and other measured data. Fourier analysis has been popular for many decades but is best suited for enhancing signals whose features are mostly localized in frequency. In contrast, wavelet analysis is appropriate for signals that contain features localized in both time and frequency, while retaining the benefits of Fourier analysis such as orthonormality and computational efficiency. Practical algorithms for off-line and on-line denoising were described and compared via simple examples; these algorithms can be used for off-line or on-line data and can remove Gaussian as well as non-Gaussian noise [55]. A new wavelet-based image denoising method using the undecimated discrete wavelet transform and least squares support vector machine was proposed in 2010. Extensive experimental results demonstrated that this method can obtain better performance, in terms of both subjective and objective evaluations, than state-of-the-art denoising techniques [56].
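The basic wavelet thresholding scheme behind many of the denoising methods above can be illustrated with PyWavelets. The choice of the db4 wavelet, three decomposition levels, soft thresholding and the universal threshold are illustrative assumptions, not parameters taken from any cited paper.

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_denoise(img, wavelet="db4", level=3):
    """Soft-threshold the detail coefficients of a 2-D DWT.

    The noise level is estimated from the finest-scale diagonal
    coefficients (median absolute deviation), and the universal
    threshold sigma * sqrt(2 * log N) is applied to every detail band."""
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745   # robust noise estimate
    thresh = sigma * np.sqrt(2 * np.log(img.size))
    denoised = [coeffs[0]]                               # keep approximation band
    for cH, cV, cD in coeffs[1:]:
        denoised.append(tuple(pywt.threshold(c, thresh, mode="soft")
                              for c in (cH, cV, cD)))
    return pywt.waverec2(denoised, wavelet)

# Example: denoise a placeholder image corrupted by Gaussian noise.
rng = np.random.default_rng(2)
clean = rng.random((128, 128))
noisy = clean + 0.1 * rng.standard_normal(clean.shape)
restored = wavelet_denoise(noisy)
```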
In medical diagnosis, operations such as feature extraction and object recognition play a key role, and these tasks become difficult if the images are corrupted with noise. Many wavelet-based denoising algorithms that use the DWT (discrete wavelet transform) in the decomposition stage suffer from shift variance and lack of directionality. To overcome this, a denoising method was proposed which uses the dual tree complex wavelet transform to decompose the image and a shrinkage operation to eliminate the noise. In the shrinkage step, semi-soft and Stein thresholding operators were used along with the traditional hard and soft thresholding operators, and the suitability of the dual tree complex wavelet transform for the denoising of medical images was verified. The results showed that the image denoised using the DTCWT (dual tree complex wavelet transform) has a better balance between smoothness and accuracy than the DWT and is less redundant than the UDWT (undecimated wavelet transform). SSIM (structural similarity index measure) along with PSNR (peak signal-to-noise ratio) were used to assess the quality of the denoised images [57]. An iterative fuzzy clustering technique was developed in 1985 for image segmentation; it was believed that this method represented an image segmentation scheme that could be used as a preprocessor for a multivalued-logic based computer vision system [58]. Starting from ideas due to Zadeh and Sugeno, Hans suggested two methods to transfer information given by fuzzy observations to fuzzy sets on the parameter region of a given explicit functional relationship: expected cardinality and fuzzy expectation [59]. The relationship between fuzzy information and Shannon's information was studied based on the information equivalence between gray-tone images and half-tone images; using this, the fuzzy membership function for gray-level images was determined experimentally in 1988, and the results showed that the membership function of gray levels has the form of an asymmetric S-function [60]. The formal rules which describe the behaviour of a 'globally fuzzy' technique for image processing were discussed in 1994 [61]. Lee, by the end of 1995, showed how fuzzy reasoning techniques can be applied to the design of smoothing filters, using the fuzzy concept to decide whether a pixel in an image is noisy or not, in order to achieve maximum noise reduction in uniform areas while preserving details [62]. In 1996, Kim showed that an additive fuzzy system can learn ellipsoidal fuzzy rule patches from a new pseudo-covariation matrix or measure of alpha-stable covariation. The Mahalanobis distance gives a joint set function for the learned if-part fuzzy sets of the if-then rules; the joint set function preserves input correlations that factored set functions ignore. Competitive learning tunes the local means and pseudo-covariations of the alpha-stable statistics and thus tunes the fuzzy rules. The covariation rules can then both predict nonlinear signals in impulsive noise and filter the impulsive noise in time-series data; the fuzzy system filtered such noise better than a benchmark radial basis neural network [63]. A novel median-type filter controlled by fuzzy rules was proposed in order to remove impulsive noise on signals such as images. The filter was obtained as a weighted sum of the input signal and the output of the median filter, and the weight is set based on fuzzy rules concerning the states of the input signal sequence; moreover, this weight is obtained optimally by a learning method, so that the mean square error of the filter output for some training signal data is minimized. Results of image processing showed the high performance of this filter [64]. A fuzzy filter for the removal of heavy additive impulse noise, called the weighted fuzzy mean (WFM) filter, was proposed in 1997.
When the noise probability exceeds 0.3, the WFM filter gives far superior performance compared with conventional filters when evaluated by mean absolute error (MAE), mean square error (MSE), peak signal-to-noise ratio (PSNR) and subjective evaluation criteria; for dedicated hardware implementation, WFM is also much simpler than the conventional median filter [65]. In the design of multichannel filters, especially in color image restoration, it is not easy to simultaneously achieve three objectives: noise attenuation, chromaticity retention, and preservation of edges or details. In 1999, a new class of multichannel filters called adaptive fuzzy hybrid multichannel (AFHM) filters was presented. The AFHM filters are able to effectively inherit the merits of the filtering behaviours of these three filters in color image restoration applications; this was the first paper to include human concepts in the design of multichannel filters. Moreover, a faster convergence property of the learning algorithm was investigated to reduce the time complexity of the AFHM filters [66]. The problem of impulsive noise reduction in multichannel images was addressed in 2005. A new filter was proposed based on privileging the central pixel in each filtering window in order to replace it only when it is really noisy and to preserve the original undistorted image structures. The new filter, created by combining this scheme with a novel fuzzy metric, outperformed classical order-statistics filtering techniques [67]. A new operator for removing impulse noise from digital images was presented; the proposed operator is a hybrid filter constructed by combining four center-weighted median filters (CWMF) with a simple adaptive neuro-fuzzy inference system (ANFIS). The fundamental advantage of the proposed operator over other operators in the literature is that it efficiently removes impulse noise while at the same time effectively preserving image details and texture [68]. A two-step filtering method (entitled the Fuzzy Random Impulse Noise Reduction method, FRINR), consisting of a fuzzy detection mechanism and a fuzzy filtering method to remove (random-valued) impulse noise from corrupted images, was presented in 2007 [69]. A new impulse noise reduction method for colour images, called the histogram-based fuzzy colour filter (HFC), was presented. The HFC filter is particularly effective for reducing high impulse noise in digital images while preserving edge sharpness. Colour images that are corrupted with noise are generally filtered by applying a greyscale algorithm on each colour component separately; this approach causes artefacts, especially on edge or texture pixels. Vector-based filtering methods were successfully introduced to overcome this problem. Stefan in 2007 discussed an alternative technique so that no artefacts are introduced; the main difference between the new proposed method and the classical vector-based methods is the usage of colour component differences for the detection of impulse noise and the preservation of the colour component differences [70,71]. Based on the integration of a simple impulse detector and a robust neuro-fuzzy (RNF) network, an effective impulse noise filter for color images was presented by Wang in 2008. It consists of two modes of operation, namely training and testing (filtering). The experimental results demonstrated that this filter not only has the abilities of noise attenuation and detail preservation but also possesses desirable robustness and adaptive capabilities, and it outperformed other conventional multichannel filters [72].
For the first time, a novel pixel classification method using fuzzy logic and local gradients was introduced to discriminate noisy and noise-free pixels in corrupted images; a switching filtering scheme was then applied to the noisy images. Both numerical comparisons and perceptual observations were demonstrated in the experimental simulations, and it was found that the proposed method achieves considerably better performance than a number of other existing methods in noise reduction and image restoration [73]. Connected filters are widely used in image processing, and their implementation benefits greatly from tree representations of images, called max-trees. Extending these filters to fuzzy sets, which may be used to represent imprecision in gray levels in fuzzy gray-level images, requires frequent manipulation of these trees. Giovanni in 2010 proposed efficient algorithms to update tree representations of fuzzy sets according to modifications of the membership values; it was shown that any modification can be reduced to a series of simple changes, where only one pixel is modified at each step [74]. A fuzzy filter for the removal of random impulse noise in digital grayscale image sequences was presented by Tom in 2011. The filter consists of different noise detection and filtering steps, in which fuzzy set theory is used. The noise detection is based both on spatial and on temporal information and aims to prevent the filtering of noise-free image pixels; the filtering of the detected noisy pixels is finally performed in a motion-compensated way [75]. A novel decision-based fuzzy averaging (DFA) filter consisting of a D–S (Dempster–Shafer) noise detector and a two-pass noise filtering mechanism was presented. A fuzzy averaging method, in which the weights are constructed using a predefined fuzzy set, was developed to achieve noise cancellation, and a simple second-pass filter was employed to improve the final filtering performance. Experimental results confirmed the effectiveness of the new DFA filter both in suppressing impulsive noise, as well as mixed Gaussian and impulsive noise, and in improving perceived image quality [76]. Madhu proposed a new fuzzy-based two-step filter for restoring images corrupted with additive noise. The goal of the first step is to compute the difference between the central pixel and its neighbourhood in a selected window and to compute a fuzzy membership degree for each difference value using a Gaussian membership function; the computed fuzzy membership values are then used as weights for each pixel, and the weighted average gives the modified value for the current central pixel. The second step augments the first one, and its goal is to improve the result obtained in the first step by reducing the noise in the colour component differences without destroying the fine details of the image. The experimental analysis showed that the proposed method gives better results compared to existing advanced filters for additive noise reduction; visual, quantitative and qualitative analyses were all carried out to demonstrate the efficiency and effectiveness of the proposed method [77].
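The first step described in [77] can be pictured with the following sketch, in which each neighbour is weighted by a Gaussian membership of its intensity difference from the centre pixel. The window size and the spread parameter sigma_m are illustrative assumptions, and this is a generic reading of the idea rather than the authors' implementation.

```python
import numpy as np

def fuzzy_weighted_average(img, size=3, sigma_m=0.1):
    """Replace each pixel by a weighted average of its window, where the
    weight of a neighbour is a Gaussian membership degree of its
    intensity difference from the centre pixel: similar pixels count
    more, outliers (likely noise, or edges across the window) count less."""
    pad = size // 2
    padded = np.pad(img, pad, mode="reflect")
    out = np.empty_like(img, dtype=float)
    rows, cols = img.shape
    for i in range(rows):
        for j in range(cols):
            win = padded[i:i + size, j:j + size]
            diff = win - img[i, j]
            w = np.exp(-(diff ** 2) / (2 * sigma_m ** 2))  # fuzzy membership
            out[i, j] = np.sum(w * win) / np.sum(w)
    return out
```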
Most conventional median filters process all pixels without classifying whether a pixel is noisy or noiseless. Recently, a method was proposed based on a switching median filter. It consists of two stages, namely a detection stage and a filtering stage. In the detection stage, a neighbourhood-mapping based algorithm is used to detect the corrupted pixels; in the filtering stage, the corrupted pixels are filtered using a fuzzy membership function, while the uncorrupted pixels are retained as such. The proposed method was compared with many existing algorithms and can restore images that are highly corrupted, up to 90% noise density. Simulation results showed that the proposed switching median filter algorithm is better than other existing techniques in terms of visual and quantitative measures such as PSNR, MSE and SSIM, evaluated using MATLAB [78]. Low-dose CT imaging has been widely used in modern medical practice for its advantage of reducing the radiation dose to patients; however, reconstructed images distinctly degenerate as the radiation dose decreases. One resolution is to treat the noisy projection space with an effective filter. A recent study addressed this problem and proposed a fuzzy-median filter designed according to the properties of the noise of low-dose CT images. Simulations indicated that this spatially variant filter could suppress noise and markedly decrease streak artifacts in reconstructed images [79].
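The detect-then-filter structure of switching median filters such as [78] can be sketched as follows. The detection rule below (flagging pixels that deviate from the local median by more than a fixed threshold) is a common simplification chosen for illustration, not the neighbourhood-mapping detector or the fuzzy filtering stage of [78].

```python
import numpy as np
from scipy.ndimage import median_filter

def switching_median(img, size=3, threshold=0.2):
    """Two-stage switching median filter (generic sketch).

    Detection stage: a pixel is flagged as an impulse when it differs
    from the local median by more than `threshold`.
    Filtering stage: only flagged pixels are replaced by the local
    median; all other pixels are left untouched."""
    med = median_filter(img, size=size)
    noisy_mask = np.abs(img - med) > threshold
    out = img.copy()
    out[noisy_mask] = med[noisy_mask]
    return out
```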
Approximate Gaussian filtering of equidistant data can be obtained by regularizing the data with Tikhonov's second-order stabilizing functionals. The correspondence between the resulting cubic spline functions and Gaussian functions was first shown by Poggio; the impulse response arising from cubic spline approximation is, however, not positive everywhere. As an alternative, approximating cubic splines under tension was considered in 1989 by Johannes, and a fast implementation was proposed that requires the same amount of calculation for all values of the spread [80]. The equivalence of minimum-norm least-squares solutions of systems of linear equations and standard iterative methods of solution is well established. On the other hand, while it is generally understood that truncated iteration is a form of regularization, comparatively few papers have formalized the relationship between direct methods of regularization and truncated iteration. A brief review of such papers was presented in 1990; it was proved that solutions by direct regularization are in fact identical to solutions of a certain type of truncated-iterative method, and conversely, and this equivalence was proved by construction for a very general form of regularization method in which the coefficient matrix has full rank and is rectangular [81]. A global edge detection algorithm based on variational regularization was presented and analysed. The algorithm can also be viewed as an anisotropic diffusion method, and the two quite different methods are thereby unified. This puts anisotropic diffusion, as a method in early vision, on more solid ground: it is just as well founded as the well-accepted standard regularization techniques. The unification also brings the anisotropic diffusion method an appealing sense of optimality, thereby intuitively explaining its extraordinary performance [82]. In 1992, a nonlinear regularized iterative image restoration algorithm was proposed in which no prior knowledge about the noise variance is assumed. The algorithm results from a set-theoretic regularization approach, where bounds of the stabilizing functional and the noise variance, which determine the regularization parameter, are updated at each iteration step. Sufficient conditions for the convergence of the algorithm, as well as an optimality criterion for the regularization parameter, were derived [83]. A layered architecture was proposed for solving a class of regularization problems in image processing. There are two major hurdles in the implementation of regularization filters with second- or higher-order smoothness constraints: (a) stability: with second- or higher-order constraints, a direct implementation of a regularization filter necessitates negative conductance, which in turn gives rise to stability problems; and (b) wiring complexity: a direct implementation of an N-th order regularization filter requires wiring between every pair of k-th nearest nodes for all k, 1 ≤ k ≤ N. Even though one of the authors managed to lay out an N = 2 chip, the implementation of an N ≥ 3 chip would be an extremely difficult, if not impossible, task. In 1993, a regularization filter architecture was proposed which requires no negative conductance and necessitates wiring only between nearest nodes [84]. In 1995, an improvement to the choice of the regularization parameter involved in a deconvolution procedure was proposed, based on a statistical model allowing a good estimation of the spectral signal-to-noise ratio [85]. In 1998, the restoration of images that have several channels of information was considered, using a probabilistic scheme which had proved rather useful for image restoration, with an additional term incorporated into it which results in a better correlation between the color bands in the restored image. The results obtained were good; typically, there is a reduction of 20 to 40% in the mean square error compared to standard restoration carried out separately on each color band [86]. In 1999, a modified version of classical regularization techniques was proposed: instead of using regularization alone to reduce the measurement noise effect by cancelling the inverse filter singularities and to restore the original signal, a prefiltering is performed before the regularization; this prefiltering is obtained using a Wiener filter based on a particular modelization of the signal to be restored [87]. Hong introduced two new edge-preserving image compression approaches based on the wavelet transform and an iterative constrained least squares regularization approach in 2000; they utilize the edge information detected from the source image as a priori knowledge for the subsequent reconstruction, and one of the approaches makes use of the spatial characteristics of wavelet-coded images to enhance its restoration performance [88]. An inverse ill-posed problem was considered in 2002, coming from the area of dynamic magnetic resonance imaging (MRI), where high-resolution images must be reconstructed from incomplete data sets collected in the Fourier domain. The behaviour of regularization methods such as the truncated singular value decomposition (TSVD), the Lavrent'yev regularization method, and conjugate gradient (CG) type iterative methods was analyzed [89].
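The Tikhonov-type smoothness functionals mentioned around [80], and the trade-off between a data-fidelity term and a regularization term balanced by a parameter, can be illustrated with a small self-contained sketch. It is one-dimensional and uses dense matrices purely for clarity; the penalty weight lam plays the role of the balancing parameter and is an assumption, not a value from any cited work.

```python
import numpy as np

def tikhonov_smooth(y, lam=10.0):
    """Denoise a 1-D signal by minimising
        ||x - y||^2 + lam * ||D2 x||^2,
    i.e. Tikhonov regularization with a second-order smoothness penalty,
    where D2 is the second-difference operator. The closed-form solution
    solves (I + lam * D2^T D2) x = y."""
    n = len(y)
    D2 = np.zeros((n - 2, n))          # second-difference matrix
    for i in range(n - 2):
        D2[i, i:i + 3] = [1.0, -2.0, 1.0]
    A = np.eye(n) + lam * D2.T @ D2
    return np.linalg.solve(A, y)

# Larger lam gives a smoother estimate; lam -> 0 returns the noisy data.
rng = np.random.default_rng(3)
t = np.linspace(0, 1, 200)
noisy = np.sin(2 * np.pi * t) + 0.2 * rng.standard_normal(t.size)
smoothed = tikhonov_smooth(noisy, lam=50.0)
```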
Color image processing is an essential issue in computer vision. Variational formulations provide a framework for color image restoration, smoothing and segmentation problems. The solutions of variational models can be obtained by minimizing appropriate energy functions, and this minimization is usually performed via continuous partial differential equations (PDEs). The problem is usually posed as a regularization matter which minimizes a smoothness term plus a fidelity term. In 2007, Olivier proposed a general discrete regularization framework defined on weighted graphs of arbitrary topology, which can be seen as a discrete analogue of classical regularization theory. The smoothness term of the regularization uses a discrete definition of the p-Laplace operator. With this formulation, families of fast and simple anisotropic linear and nonlinear filters which do not involve PDEs were obtained [90]. In 2009, for image restoration, an edge-preserving regularization method was used to solve an optimization problem whose objective function has a data-fidelity term and a regularization term, the two terms being balanced by a parameter λ; to a large extent, the value of λ determines the quality of the images. A new model to estimate this parameter was established, together with an algorithm to solve the problem, and the quality of images was improved by dividing each image into blocks [91]. Non-blind image deconvolution is a process that obtains a sharp latent image from a blurred image when the point spread function (PSF) is known. However, ringing and noise amplification are inevitable artifacts in image deconvolution, since perfect PSF estimation is impossible. The conventional regularization used to reduce these artifacts cannot preserve image details in the deconvolved image when the PSF estimation error is large and strong regularization is needed. A non-blind image deconvolution method was proposed which preserves image details while suppressing ringing and noise artifacts by controlling the regularization strength according to local characteristics of the image [92]. Stochastic regularized methods are quite advantageous in super-resolution (SR) image reconstruction problems. In these techniques, the SR problem is formulated by means of two terms, the data-fidelity term and the regularization term. Experiments were carried out with the widely employed L2, L1, Huber and Lorentzian estimators for the data-fidelity term, and the Tikhonov and Bilateral Total Variation (BTV) techniques for the regularization term. The conclusion drawn was that, when the candidate methods share a common data-fidelity or regularization term and the frames are noiseless, the method which employs the most robust regularization or data-fidelity term should be used [93].

III. EFFECT OF DENOISING FILTERS ON NORMAL IMAGES

Journal: International Journal of Scientific and Research Publications, Volume 3, Issue 1 (ISSN 2250-3153)
Publication date: January 2013